Over the past decade, we have seen exponential growth in online content fueled by social media platforms. Data generation at this scale comes with the caveat of insurmountable offensive content in it. The complexity of identifying offensive content is exacerbated by the use of multiple modalities (images, language, etc.), code-mixed languages, and more. Moreover, even if we carefully sample and annotate offensive content, there will always be a significant class imbalance between offensive and non-offensive content. In this paper, we introduce a novel focal loss based on the Code-Mixing Index (CMI), which circumvents two challenges for offensive-language detection in Dravidian languages: (1) code-mixed language and (2) the class-imbalance problem. We also replace the traditional dot-product-based classifier with a cosine-based classifier, which yields a performance boost. Further, we use multilingual models that help transfer characteristics learnt across languages to work effectively with low-resource languages. It is also important to note that our model handles instances of mixed script (say, the use of Latin and Dravidian - Tamil scripts) as well. Our model can handle offensive language detection in low-resource, class-imbalanced, multilingual, and code-mixed settings.
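The two components the abstract names, a focal loss for class imbalance and a cosine classifier in place of dot-product logits, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `alpha` stands in for whatever CMI-derived example weight the method computes, and the `scale` factor is a common but assumed choice for cosine classifiers.

```python
import math

def focal_loss(p_true, gamma=2.0, alpha=1.0):
    # Focal loss down-weights easy examples (p_true near 1):
    # loss = -alpha * (1 - p)^gamma * log(p).
    # Here alpha is a stand-in for a CMI-derived per-example weight (assumption).
    return -alpha * (1.0 - p_true) ** gamma * math.log(p_true)

def cosine_logits(feature, class_weights, scale=16.0):
    # Cosine classifier: logits are scaled cosine similarities between the
    # L2-normalized feature and L2-normalized class weight vectors, so the
    # magnitude of the feature no longer dominates the prediction.
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    f = normalize(feature)
    return [scale * sum(a * b for a, b in zip(f, normalize(w)))
            for w in class_weights]
```

Note how the focal term makes a confidently-classified example (p = 0.9) contribute far less loss than a hard one (p = 0.5), which is what counteracts the offensive/non-offensive imbalance.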
Purpose: Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. Methods: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of patient skull and surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints on the object level. Results: We validate TAToo both on simulation data, where ground truth motion is available, and on anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for skull and drill, respectively, with rotation errors below 1°. We further illustrate how TAToo may be used in a surgical navigation setting. Conclusion: We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo directly predicts the motion from surgical videos, without the need for any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach the 1 mm clinical accuracy goal desired for surgical applications in the skull base.
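The sub-millimeter translation and sub-degree rotation accuracies reported above are conventionally measured with standard rigid-pose error metrics; a minimal sketch of those metrics (an assumed, standard formulation, not code from TAToo):

```python
import math

def rotation_error_deg(R_est, R_gt):
    # Relative rotation R_rel = R_est^T @ R_gt; its rotation angle follows
    # from the trace formula: angle = arccos((trace(R_rel) - 1) / 2).
    R_rel = [[sum(R_est[k][i] * R_gt[k][j] for k in range(3))
              for j in range(3)] for i in range(3)]
    tr = sum(R_rel[i][i] for i in range(3))
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))  # clamp for numerical safety
    return math.degrees(math.acos(c))

def translation_error(t_est, t_gt):
    # Euclidean distance between estimated and ground-truth translations
    # (in whatever unit the poses use, e.g. millimeters).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_est, t_gt)))
```

Applying these per consecutive frame pair gives exactly the "inter-frame tracking accuracy" quantities the abstract cites.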
Recent advances in deep learning research, such as transformers, have bolstered the ability for automated agents to generate creative texts similar to those that a human would write. By default, transformer decoders can only generate new text with respect to previously generated text. The output distribution of candidate tokens at any position is conditioned on previously selected tokens using a self-attention mechanism to emulate the property of autoregression. This is inherently limiting for tasks such as controllable story generation where it may be necessary to condition on future plot events when writing a story. In this work, we propose Future Sight, a method for finetuning a pretrained generative transformer on the task of future conditioning. Transformer decoders are typically pretrained on the task of completing a context, one token at a time, by means of self-attention. Future Sight additionally enables a decoder to attend to an encoded future plot event. This motivates the decoder to expand on the context in a way that logically concludes with the provided future. During inference, the future plot event can be written by a human author to steer the narrative being generated in a certain direction. We evaluate the efficacy of our approach on a story generation task with human evaluators.
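The core idea, letting a decoder attend to an encoded future event in addition to its causal past, can be illustrated with a toy scaled dot-product attention in which the key/value set is extended by one extra vector. This is an illustrative sketch under assumed toy dimensions, not Future Sight's actual architecture or training setup.

```python
import math

def attention(query, keys, values):
    # Scaled dot-product attention over a list of key/value vectors.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

def decode_step(past_states, future_event, query):
    # Future-conditioning, sketched: the decoder attends over its causal
    # past states *plus* one encoded future plot event appended to the
    # key/value set, so the future can steer the next-token representation.
    kv = past_states + [future_event]
    return attention(query, kv, kv)
```

When the query aligns with the future embedding, the attention output is pulled toward it, which is the mechanism that lets a human-written future event steer generation.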
In this paper, we propose and showcase, for the first time, monocular multi-view layout estimation for warehouse racks and shelves. Unlike typical layout estimation methods, MVRackLay estimates multi-layered layouts, wherein each layer corresponds to the layout of a shelf within a rack. Given a sequence of images of a warehouse scene, a dual-headed Convolutional-LSTM architecture outputs segmented racks, the front and the top view layout of each shelf within a rack. With minimal effort, such an output is transformed into a 3D rendering of all racks, shelves and objects on the shelves, giving an accurate 3D depiction of the entire warehouse scene in terms of racks, shelves and the number of objects on each shelf. MVRackLay generalizes to a diverse set of warehouse scenes with varying number of objects on each shelf, number of shelves and in the presence of other such racks in the background. Further, MVRackLay shows superior performance vis-a-vis its single view counterpart, RackLay, in layout accuracy, quantized in terms of the mean IoU and mAP metrics. We also showcase a multi-view stitching of the 3D layouts resulting in a representation of the warehouse scene with respect to a global reference frame akin to a rendering of the scene from a SLAM pipeline. To the best of our knowledge, this is the first such work to portray a 3D rendering of a warehouse scene in terms of its semantic components - Racks, Shelves and Objects - all from a single monocular camera.
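Layout accuracy above is quantized in terms of mean IoU; for binary layout masks that metric reduces to the following (a standard definition, sketched here for reference rather than taken from the MVRackLay code):

```python
def iou(mask_a, mask_b):
    # Intersection-over-Union of two binary masks, given as flat 0/1 lists.
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

def mean_iou(masks_a, masks_b):
    # Mean IoU across a set of predicted/ground-truth mask pairs
    # (e.g. one pair per shelf layout).
    scores = [iou(a, b) for a, b in zip(masks_a, masks_b)]
    return sum(scores) / len(scores)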
During software development, developers need to answer queries about semantic aspects of code. Even though natural language has been studied extensively with neural methods, the problem of answering semantic queries over code with neural networks remains unexplored. This is mainly because there is no existing dataset with extractive question-and-answer pairs over code involving complex concepts and long chains of reasoning. We bridge this gap by building a new, curated dataset called CodeQueries, and by proposing a neural question-answering methodology over code. We build upon state-of-the-art pre-trained models of code to predict answer spans and supporting-fact spans. Given a query and code, only some of the code may be relevant to answering the query. We first experiment under an ideal setting where only the relevant code is given to the model, and show that our models do well. We then experiment under three pragmatic considerations: (1) scaling to large-size code, (2) learning from a limited number of examples, and (3) robustness to minor syntax errors in code. Our results show that while a neural model can be resilient to minor syntax errors in code, an increasing size of code, the presence of code that is irrelevant to the query, and a reduced number of training examples all limit model performance. We are releasing our data and models to facilitate future work on the proposed problem of answering semantic queries over code.
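Predicting answer and supporting-fact spans typically means the model emits per-token start and end logits over the code, and a span is decoded from them. A common decoding scheme (an assumed, standard formulation; the paper's exact decoder may differ) is:

```python
def best_span(start_logits, end_logits, max_len=None):
    # Extractive QA decoding: choose token positions (i, j) with i <= j
    # maximizing start_logits[i] + end_logits[j], optionally capping the
    # span length at max_len tokens.
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_logits):
        limit = len(end_logits) if max_len is None else min(len(end_logits),
                                                            i + max_len)
        for j in range(i, limit):
            score = s + end_logits[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best
```

The i <= j constraint is what distinguishes this from independently taking the argmax of each logit vector, which can produce an end before a start.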
Nowadays, yoga is gaining worldwide attention because of the increasing stress of modern lifestyles, and there are many ways and resources for learning it. The word yoga means a deep connection between mind and body. Today there is substantial medical and scientific evidence that the very fundamentals of our brain activity, and even our chemistry, can be changed by practising different systems of yoga. Surya Namaskar, also known as the "Salute to the Sun", is a yoga practice that combines eight different forms and 12 asanas (4 asanas are repeated), devoted to the Hindu sun god, Surya. Surya Namaskar offers many health benefits, such as strengthening muscles and helping to control blood sugar levels. Here, the MediaPipe library is used to analyse the performance of Surya Namaskar. The software detects each stance in real time as a person performs Surya Namaskar in front of a camera. A classifier identifies the current form as one of the following: Pranamasana, Hasta Padasana, Hasta Uttanasana, Ashwa Sanchalanasana, Ashtanga Namaskara, Dandasana or Bhujangasana, and Svanasana. A deep-learning technique (a CNN) is used to develop the model, which achieves an accuracy of 98.68% and a precision score of 0.75 in detecting correct yoga (Surya Namaskar) poses. With this method, users can practise the desired poses and check whether the poses they perform are correct. It will help practitioners perform all the different poses of Surya Namaskar correctly and improve their efficiency. This paper describes the complete framework implemented in the model.
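Pose-correctness checks on skeletal landmarks (such as those MediaPipe Pose produces) commonly reduce to joint angles between triplets of keypoints. A minimal sketch with hypothetical 2D landmark tuples; this is an illustration of the general technique, not the paper's CNN classifier:

```python
import math

def joint_angle_deg(a, b, c):
    # Angle at joint b formed by landmarks a-b-c (e.g. shoulder-elbow-wrist),
    # with each landmark given as an (x, y) tuple in image coordinates.
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))
```

A rule such as "the elbow must stay within a few degrees of 180° in Pranamasana" (a hypothetical threshold) can then be checked per frame against the computed angle.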
We provide a convergence analysis of gradient descent for the problem of agnostically learning a ReLU function under Gaussian distributions. Unlike prior work that studies the zero-bias setting, we consider the more challenging scenario in which the bias of the ReLU function is non-zero. Our main result establishes that, starting from random initialization, in a polynomial number of iterations gradient descent outputs, with high probability, a ReLU function whose error guarantee is competitive with that of the best-fitting ReLU function. We also provide finite-sample guarantees, and these techniques generalize to a broader class of marginal distributions beyond Gaussians.
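A toy one-dimensional instance makes the object of study concrete: gradient descent on the squared loss of a biased ReLU, where the subgradient vanishes wherever the unit is inactive. This is an illustrative sketch of the realizable 1-D case, not the paper's agnostic setting with Gaussian marginals.

```python
def relu(z):
    return max(0.0, z)

def gd_relu(xs, ys, w=0.5, b=0.5, lr=0.1, steps=500):
    # Gradient descent on (1/n) * sum (relu(w*x + b) - y)^2.
    # Points where w*x + b <= 0 contribute zero (sub)gradient.
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            z = w * x + b
            if z > 0:
                err = relu(z) - y
                gw += 2.0 * err * x / n
                gb += 2.0 * err / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

On data generated by a ReLU with w = 2, b = 1 (all activations positive), the iterates recover both the weight and the non-zero bias, the quantity whose analysis the paper extends beyond prior zero-bias work.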
Incremental learning is a paradigm that enables large-scale model building and updating with streaming data. For end-to-end automatic speech recognition (ASR) tasks, the absence of human-annotated labels, along with the need to preserve privacy policies during model building, makes it a daunting challenge. Motivated by these challenges, in this paper we demonstrate, using a cloud-based framework for production systems, insights from incremental learning for privacy-preserving automatic speech recognition (ILASR). By privacy-preserving, we mean the use of ephemeral data without human annotations. The system is a step towards production-level ASR models for incremental/continual learning, and offers a near real-time test-bed for end-to-end ASR experimentation in the cloud while adhering to privacy-preserving policies. We show that, even in the absence of human-annotated labels, the proposed system can improve the production model significantly (3%) over a new time period of six months, with weak supervision and large batch sizes in incremental learning. This improvement is 20% on test sets with new words and phrases in the new time period. We further demonstrate the effectiveness of model building for ASR in a privacy-preserving incremental fashion, while exploring the utility of having an effective teacher model and of using large batch sizes.
In this paper, we propose a natural notion of individual preference (IP) stability for clustering, which requires that every data point is, on average, closer to the points in its own cluster than to the points in any other cluster. Our notion can be motivated from several perspectives, including game theory and algorithmic fairness. We study several questions related to the proposed notion. We first show that deciding whether a given data set admits an IP-stable clustering is NP-hard in general. As a result, we explore the design of efficient algorithms for finding IP-stable clusterings in certain restricted metric spaces. We present a poly-time algorithm for finding a clustering satisfying exact IP-stability on the real line, and an efficient algorithm for finding an IP-stable 2-clustering for tree metrics. We also consider a relaxation of the stability constraint, namely that every data point should not be too far from its own cluster compared to any other cluster; in this setting, we provide poly-time algorithms with different guarantees. We evaluate some of our algorithms, along with several standard clustering methods, on real data sets.
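The definition above is directly checkable: a clustering is IP-stable when each point's average distance to its own cluster (excluding itself) does not exceed its average distance to any other cluster. A minimal verifier, sketched from that definition (not an algorithm from the paper):

```python
def is_ip_stable(points, labels, dist):
    # points: list of data points; labels: cluster id per point;
    # dist: a metric function on pairs of points.
    clusters = set(labels)
    for i, p in enumerate(points):
        own = [dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        if not own:            # singleton clusters impose no constraint here
            continue
        own_avg = sum(own) / len(own)
        for c in clusters - {labels[i]}:
            other = [dist(p, q) for j, q in enumerate(points)
                     if labels[j] == c]
            if other and own_avg > sum(other) / len(other):
                return False   # point i prefers cluster c on average
    return True
```

On the real line with two well-separated groups the natural clustering is IP-stable, while splitting each group across clusters violates the condition.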
One of the most pressing societal issues is the fight against fake news. False claims, being hard to expose, cause a great deal of damage. To address this problem, fact verification becomes crucial, and it is thus a topic of interest across different research communities. Using only the textual form of the data, we propose our solution to the problem and achieve results competitive with other approaches. We present solutions based on two approaches: fine-tuning of pre-trained language models (PLMs) and prompt-based learning. The PLM-based approach uses traditional supervised learning, where the model is trained to take "x" as input and output a prediction P(y|x). In contrast, prompt-based learning reflects the idea of designing the input to fit the model, so that the original objective can be re-framed as a problem of (masked) language modeling. We can further stimulate the rich knowledge in PLMs to better serve downstream tasks by employing extra prompts to fine-tune them. Our experiments showed that the proposed method performs better than just fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and seventh place on the competition leaderboard.
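The reframing from P(y|x) classification to masked language modeling can be sketched as a template plus a verbalizer that maps labels to words scored at the mask position. This is an illustrative sketch: the template wording and the `mask_fill_logprob` scorer are hypothetical stand-ins for a real PLM, not the paper's prompts.

```python
def prompt_classify(claim, evidence, mask_fill_logprob, verbalizer):
    # Prompt-based verification, sketched: wrap the input in a cloze template
    # and let a masked LM score each label's verbalizer word at [MASK].
    # mask_fill_logprob(template, word) -> log-probability of `word` filling
    # the [MASK] slot (a stand-in for a real PLM call).
    template = (f'{evidence} Question: is the claim "{claim}" true? '
                f"Answer: [MASK].")
    scores = {label: mask_fill_logprob(template, word)
              for label, word in verbalizer.items()}
    return max(scores, key=scores.get)
```

The appeal of this formulation is that the PLM's pretraining objective is reused directly, instead of training a fresh classification head from scratch.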